Recent CLIP-guided 3D optimization methods, e.g., DreamFields and PureCLIPNeRF, have achieved great success in zero-shot text-guided 3D synthesis. However, because they are trained from scratch with random initialization and no prior knowledge, these methods usually fail to generate accurate and faithful 3D structures that conform to the input text. In this paper, we make the first attempt to introduce an explicit 3D shape prior into CLIP-guided 3D optimization methods. Specifically, we first generate a high-quality 3D shape from the input text in a text-to-shape stage and use it as the 3D shape prior. We then use it to initialize a neural radiance field, which we optimize with the full prompt. For text-to-shape generation, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between images synthesized by the text-to-image model and the shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, Dream3D, is capable of generating imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods.
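The core loop, pushing the embedding of a rendered candidate toward the text embedding starting either from a random initialization or from a shape prior, can be caricatured in a few lines. This is a minimal numpy sketch under heavy assumptions, not Dream3D itself: the identity `embed` stands in for the NeRF renderer plus CLIP image encoder, and the "prior" is simply an initialization that already lies close to the target embedding.

```python
import numpy as np

def cosine(a, b):
    """CLIP-style score: cosine similarity between two embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def optimize(params, text_emb, embed, lr=0.1, steps=200, eps=1e-4):
    """Naive finite-difference ascent on cosine(embed(params), text_emb)."""
    for _ in range(steps):
        base = cosine(embed(params), text_emb)
        grad = np.zeros_like(params)
        for i in range(len(params)):
            p = params.copy()
            p[i] += eps
            grad[i] = (cosine(embed(p), text_emb) - base) / eps
        params = params + lr * grad
    return params

rng = np.random.default_rng(0)
text_emb = rng.normal(size=8)                      # stand-in text embedding
embed = lambda p: p                                # identity "renderer + encoder"
random_init = rng.normal(size=8)                   # scratch training
shape_prior = text_emb + 0.1 * rng.normal(size=8)  # prior near the target

final_rand = optimize(random_init, text_emb, embed)
final_prior = optimize(shape_prior, text_emb, embed)
```

The point of the sketch is only the initialization: the prior-initialized run starts (and stays) far closer to the text embedding than the scratch run, mirroring why a shape prior stabilizes CLIP-guided optimization.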
Instruction tuning, a new learning paradigm that fine-tunes pre-trained language models on tasks specified through instructions, has shown promising zero-shot performance on various natural language processing tasks. However, it remains largely unexplored for vision and multimodal tasks. In this work, we introduce MultiInstruct, the first multimodal instruction tuning benchmark dataset, consisting of 47 diverse multimodal tasks covering 11 broad categories. Each task includes at least 5,000 instances (input-output pairs) drawn from existing open-source datasets and 5 expert-written instructions. We take OFA as the base pre-trained model for multimodal instruction tuning and, to improve its performance, explore multiple transfer learning strategies that leverage the large-scale Natural Instructions dataset. Experimental results demonstrate strong zero-shot performance on various unseen multimodal tasks and the benefit of transfer learning from text-only instructions. We also design a new evaluation metric, Sensitivity, to evaluate how sensitive the model is to the phrasing of its instructions. Our results indicate that the model becomes less sensitive to varying instructions after fine-tuning on a diverse set of tasks and instructions for each task.
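A sensitivity-style metric can be sketched concretely. The aggregation below is a hypothetical formulation (the paper's exact definition may differ): for each task, take the coefficient of variation (std/mean) of the scores a model obtains under different paraphrased instructions, then average over tasks; lower values mean the model cares less about how an instruction is worded.

```python
import numpy as np

def sensitivity(scores_per_task):
    """Mean over tasks of std/mean of per-instruction scores.

    scores_per_task: list of lists, one inner list of scores per task,
    one score per instruction variant. Lower = less sensitive.
    """
    ratios = [np.std(s) / np.mean(s) for s in scores_per_task]
    return float(np.mean(ratios))

# Hypothetical accuracies under 5 paraphrased instructions, for 2 tasks.
before_tuning = [[0.62, 0.35, 0.58, 0.22, 0.49],
                 [0.71, 0.40, 0.66, 0.52, 0.30]]
after_tuning  = [[0.61, 0.58, 0.63, 0.57, 0.60],
                 [0.69, 0.66, 0.71, 0.68, 0.65]]

print(sensitivity(before_tuning), sensitivity(after_tuning))
```

After instruction tuning the per-instruction scores cluster tightly, so the metric drops, which is the qualitative finding the abstract reports.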
Surgery is the only viable treatment for cataract patients with visual acuity (VA) impairment. Clinically, to assess the necessity of cataract surgery, it is crucial to accurately predict postoperative VA before surgery by analyzing multi-view optical coherence tomography (OCT) images. Unfortunately, due to complicated fundus conditions, determining postoperative VA remains difficult for medical experts. Deep learning methods for this problem have been developed in recent years. Although effective, these methods still face several issues, such as not efficiently exploiting the potential relations between multi-view OCT images, neglecting the key role of clinical prior knowledge (e.g., the preoperative VA value), and using only regression-based metrics, which lack reference points. In this paper, we propose a novel Cross-token Transformer Network (CTT-Net) for postoperative VA prediction that analyzes both the multi-view OCT images and the preoperative VA. To effectively fuse multi-view features of OCT images, we develop cross-token attention that restricts redundant/unnecessary attention flow. Further, we utilize the preoperative VA value to provide more information for postoperative VA prediction and to facilitate fusion between views. Moreover, we design an auxiliary classification loss to improve model performance and assess VA recovery more thoroughly, avoiding the limitation of using only regression metrics. To evaluate CTT-Net, we build a multi-view OCT image dataset collected from our collaborating hospital. Extensive experiments validate the effectiveness of our model compared to existing methods across various metrics. Code is available at: https://github.com/wjh892521292/Cataract OCT.
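The fusion idea, letting tokens from one view attend over the tokens of the other view plus an extra token carrying the preoperative VA, can be illustrated with plain scaled dot-product cross-attention. This is a generic numpy sketch of cross-view attention, not CTT-Net's actual cross-token attention (which additionally restricts the attention flow); the token dimensions and the VA embedding are made up.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tokens, kv_tokens):
    """Queries from one view attend over the other view's tokens."""
    d = q_tokens.shape[-1]
    attn = softmax(q_tokens @ kv_tokens.T / np.sqrt(d))  # rows sum to 1
    return attn @ kv_tokens

rng = np.random.default_rng(1)
view_a = rng.normal(size=(4, 16))   # 4 tokens per OCT view, dim 16
view_b = rng.normal(size=(4, 16))
va_token = np.full((1, 16), 0.3)    # preoperative VA embedded as one extra token

# Each view attends over the other view's tokens plus the VA token,
# so clinical prior knowledge participates in the fusion directly.
fused_a = cross_attention(view_a, np.vstack([view_b, va_token]))
fused_b = cross_attention(view_b, np.vstack([view_a, va_token]))
fused = np.concatenate([fused_a, fused_b]).mean(axis=0)  # pooled representation
```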
Intelligent mesh generation (IMG) refers to techniques that generate meshes via machine learning; it is a relatively new and promising research field. Within its short lifespan, IMG has greatly expanded the generalizability and practicality of mesh generation techniques and brought many breakthroughs and potential possibilities to mesh generation. However, there is a lack of surveys of IMG methods covering recent work. In this paper, we present a systematic and comprehensive survey describing the contemporary IMG landscape. Focusing on 110 preliminary IMG methods, we conduct an in-depth analysis and evaluation from multiple perspectives, including each algorithm's core technique and application scope, agent learning goals, data types, targeted challenges, and advantages and limitations. Based on content extraction from the collected literature, we propose three different taxonomies from the views of key technique, output mesh unit element, and applicable input data types. Finally, we highlight some promising future research directions and challenges in IMG. For the convenience of readers, a project page for IMG is provided at \url{https://github.com/xzb030/IMG_Survey}.
The activity of the grid cell population in the medial entorhinal cortex (MEC) of the mammalian brain forms a vector representation of the animal's self-position. Recurrent neural networks have been proposed to explain the properties of grid cells by updating the neural activity vector based on the animal's velocity input. In doing so, the grid cell system effectively performs path integration. In this paper, we investigate the algebraic, geometric, and topological properties of grid cells using recurrent network models. Algebraically, we study the Lie group and Lie algebra of the recurrent transformation as a representation of self-motion. Geometrically, we study the conformal isometry of the Lie group representation, where the local displacement of the activity vector in neural space is proportional to the local displacement of the agent in 2D physical space. Topologically, the compact abelian Lie group representation automatically leads to the torus topology commonly assumed and observed in neuroscience. We then focus on a simple non-linear recurrent model that underlies the continuous attractor neural networks of grid cells. Our numerical experiments show that conformal isometry leads to hexagonal periodic patterns in the grid cell responses, and our model is capable of accurate path integration. Code is available at \url{https://github.com/DehongXu/grid-cell-rnn}.
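The conformal isometry property can be checked numerically on a toy embedding. The sketch below is an illustration, not the paper's recurrent model: it maps 2D positions onto a torus using two orthogonal, equal-norm frequency vectors, so the neural displacement ‖v(x+Δx)−v(x)‖ is (to first order) the same constant multiple of ‖Δx‖ in every direction, which is exactly the conformal isometry condition.

```python
import numpy as np

w = 2.0
A = np.array([[w, 0.0],
              [0.0, w]])  # orthogonal, equal-norm frequency vectors => conformal

def v(x):
    """Embed a 2D position on a torus in neural space: (cos, sin) of A @ x."""
    phase = A @ x
    return np.concatenate([np.cos(phase), np.sin(phase)])

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=2)

# Measure ||v(x+dx) - v(x)|| / ||dx|| over many random directions dx.
ratios = []
for _ in range(100):
    dx = 1e-4 * rng.normal(size=2)
    ratios.append(np.linalg.norm(v(x + dx) - v(x)) / np.linalg.norm(dx))
ratios = np.array(ratios)
# Conformal isometry: the ratio is (approximately) the constant w regardless
# of direction. With unequal or non-orthogonal rows of A it would vary.
```

Note the Jacobian here satisfies JᵀJ = w²I precisely because the two frequency vectors are orthogonal with equal norm; breaking either condition breaks the direction-independence of the ratio.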
The accelerated growth of synthetic visual media generation and manipulation has now reached a point that raises significant concern and poses a serious threat to society. There is an urgent need for automatic detection networks for fake digital content and for preventing the spread of dangerous artificial information to counter this threat. In this paper, we leverage and compare two kinds of handcrafted features (SIFT and HOG) and two kinds of deep features (Xception and CNN+RNN) for the deepfake detection task. We also examine the performance of these features when there is a mismatch between the training and test sets. The evaluation is conducted on the well-known FaceForensics++ dataset, which contains four sub-datasets: Deepfakes, Face2Face, FaceSwap, and NeuralTextures. The best results come from Xception, whose accuracy can exceed 99% when the training and test sets both come from the same sub-dataset. In contrast, the results drop sharply when the training set does not match the test set. This phenomenon reveals the challenge of creating a universal deepfake detection system.
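As a rough idea of what the handcrafted route looks like, here is a toy HOG-like descriptor in numpy: per-cell histograms of gradient orientations weighted by gradient magnitude. It omits block normalization and the other refinements of a full HOG implementation, so it is a simplified stand-in, not the feature extractor used in the paper.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Toy HOG: magnitude-weighted orientation histograms per cell
    (unsigned gradients in [0, 180); no block normalization)."""
    gy, gx = np.gradient(img.astype(float))        # gradients along rows, cols
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

# A horizontal intensity ramp: gradient points along x, orientation 0 degrees,
# so all histogram mass should land in each cell's first bin.
img = np.tile(np.arange(16), (16, 1))
f = hog_features(img)   # 2x2 cells of 8x8 pixels, 9 bins each -> 36 dims
```

A classifier (e.g., an SVM) would then be trained on such descriptors; the abstract's point is that these features, like the deep ones, degrade sharply under a train/test sub-dataset mismatch.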
3D object detection from monocular images is a challenging and long-standing problem in computer vision. To combine information from different viewpoints without cumbersome 2D instance tracking, recent multi-view methods tend to sample on dense regular 3D grids in space, which is inefficient. In this paper, we attempt to improve multi-view feature aggregation by proposing a learnable keypoint sampling method that scatters pseudo-surface points in 3D space to preserve data sparsity. The scattered points, enhanced with multi-view geometric constraints and visual features, are then used to infer object locations and shapes in the scene. To explicitly compensate for the limitations of a single frame and to model multi-view geometry, we further propose a surface filter module to suppress noise. Experimental results show that our method performs significantly better than previous works in terms of 3D detection (improving by 0.1 AP on some categories on ScanNet). The code will be publicly available.
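The basic mechanics of multi-view aggregation at sparse points, projecting each scattered 3D point into every view and pooling the image features it lands on, can be sketched in numpy. This is a generic illustration with made-up intrinsics, poses, and random feature maps; the paper's learnable sampling and surface filtering are not modeled here.

```python
import numpy as np

def project(points, K, R, t):
    """Pinhole projection of Nx3 world points into pixel coordinates."""
    cam = points @ R.T + t            # world -> camera
    uv = cam[:, :2] / cam[:, 2:3]     # perspective divide
    return uv @ K[:2, :2].T + K[:2, 2]

rng = np.random.default_rng(3)
# Sparse pseudo-surface points scattered in front of the cameras (z ~ 5m).
keypoints = rng.uniform(-1, 1, size=(32, 3)) + np.array([0, 0, 5.0])
K = np.array([[100.0, 0, 64], [0, 100.0, 64], [0, 0, 1]])

# Two views: identity pose and a small lateral translation.
poses = [(np.eye(3), np.zeros(3)),
         (np.eye(3), np.array([0.2, 0.0, 0.0]))]
feature_maps = [rng.normal(size=(128, 128, 8)) for _ in poses]  # fake CNN features

per_view = []
for (R, t), fmap in zip(poses, feature_maps):
    uv = project(keypoints, K, R, t)
    ij = np.clip(np.round(uv).astype(int), 0, 127)
    per_view.append(fmap[ij[:, 1], ij[:, 0]])  # nearest-neighbor sampling
fused = np.mean(per_view, axis=0)              # aggregate features across views
```

Only the sampled points carry features forward, which is the sparsity argument against dense regular 3D grids.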
DNN-based video object detection (VOD) offers significant value and promising opportunities for the autonomous driving and video surveillance industries. However, adversarial patch attacks have drawn enormous attention in live vision tasks because of their practicality, feasibility, and strong attack effectiveness. This work proposes Themis, a software/hardware system that defends against adversarial patches for real-time, robust video object detection. We observe that adversarial patches exhibit extremely localized surface features in small regions with non-robust predictions, and we therefore propose an adversarial region detection algorithm to eliminate adversarial effects. Themis also proposes a systematic design to efficiently support the algorithm by eliminating redundant computation and memory movement. Experimental results demonstrate that the proposed method can effectively recover the system from adversarial attacks with negligible hardware overhead.
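The observation that patches produce small regions with unstable predictions suggests a simple detector: score each spatial cell by the temporal variance of per-pixel detection scores and flag the outliers. The sketch below is a hypothetical proxy for this idea, not Themis's actual algorithm; the grid size, threshold, and synthetic score maps are all made up.

```python
import numpy as np

def unstable_regions(score_maps, cell=8, thresh=0.1):
    """Flag cells whose per-pixel scores vary strongly across frames,
    a crude proxy for localized adversarial-patch instability."""
    var = np.var(np.stack(score_maps), axis=0)   # temporal variance per pixel
    h, w = var.shape
    flags = np.zeros((h // cell, w // cell), dtype=bool)
    for i in range(h // cell):
        for j in range(w // cell):
            block = var[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            flags[i, j] = block.mean() > thresh
    return flags

rng = np.random.default_rng(4)
# Five frames of a stable scene (score ~0.8 everywhere, tiny noise) ...
frames = [np.full((32, 32), 0.8) + 0.01 * rng.normal(size=(32, 32))
          for _ in range(5)]
# ... except one 8x8 "patch" whose scores flip wildly between frames.
for f in frames:
    f[:8, :8] = rng.integers(0, 2, size=(8, 8)).astype(float)

flags = unstable_regions(frames)   # only the patched cell should be flagged
```

Once flagged, such a region could be masked or inpainted before detection, which is the "eliminate adversarial effects" step at a conceptual level.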
In recent years, image and video manipulation with deepfakes has become a serious concern for security and society. As a result, many detection models and databases have been proposed to reliably detect deepfake data. However, there is growing concern that these models and training databases may be biased, causing deepfake detectors to fail. In this work, we (a) provide large-scale demographic and non-demographic annotations of 41 different attributes for five popular deepfake datasets, and (b) comprehensively analyze the AI bias of multiple state-of-the-art deepfake detection models on these databases. The analysis investigates the influence of a wide variety of distinct attributes (from over 65 million labels) on detection performance, including demographic (age, gender, ethnicity) and non-demographic (hair, skin, accessories, etc.) information. The results indicate that the investigated databases lack diversity and, more importantly, show that the deepfake detection models used are strongly biased toward many of the investigated attributes. Moreover, the results show that a model's decisions may rest on several questionable (biased) assumptions, e.g., whether a person is smiling or wearing a hat. Depending on the application of such deepfake detection methods, these biases can lead to problems of generalizability, fairness, and security. We hope the findings of this study and the annotated databases will help evaluate and mitigate bias in future deepfake detection technologies. Our annotated dataset is publicly available.
Local image feature matching, which aims to identify and correspond similar regions between image pairs, is an important concept in computer vision. Most existing image matching methods follow the one-to-one assignment principle and employ mutual nearest neighbors to ensure unique correspondences between local features across images. However, images captured under different conditions may exhibit large scale variations or viewpoint diversity, so one-to-one assignment can cause ambiguous or missing representations in dense matching. In this paper, we introduce AdaMatcher, a novel detector-free local feature matching method, which first correlates dense features through a lightweight feature interaction module and estimates the co-visible area of the paired images, then performs patch-level many-to-one assignment to predict match proposals, and finally refines them with a one-to-one refinement module. Extensive experiments show that AdaMatcher outperforms solid baselines and achieves state-of-the-art results on many downstream tasks. Moreover, the many-to-one assignment and one-to-one refinement modules can be used as a refinement network for other matching methods (such as SuperGlue) to further boost their performance. Code will be made available upon publication.
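The difference between the two assignment principles is easy to see on a tiny synthetic example where image B is at a coarser scale, so each B patch genuinely covers two A patches. This numpy sketch compares generic mutual-nearest-neighbor (one-to-one) matching with a plain row-argmax (many-to-one) assignment; it illustrates the principle only and is not AdaMatcher's actual matching module.

```python
import numpy as np

rng = np.random.default_rng(5)
desc_a = rng.normal(size=(6, 4))   # descriptors of 6 fine-scale patches in image A
# Image B at half the resolution: each of its 3 patches aggregates two A patches.
desc_b = np.stack([desc_a[0] + desc_a[1],
                   desc_a[2] + desc_a[3],
                   desc_a[4] + desc_a[5]])

sim = desc_a @ desc_b.T            # similarity matrix (6 x 3)

# One-to-one: mutual nearest neighbors. Each B patch can keep at most one
# A patch, so at least half of A's true correspondences are dropped.
row_nn = sim.argmax(axis=1)
col_nn = sim.argmax(axis=0)
mutual = [(i, j) for i, j in enumerate(row_nn) if col_nn[j] == i]

# Many-to-one: every A patch keeps its best B patch, so correspondences
# under scale change are preserved (to be refined to sub-patch accuracy later).
many_to_one = [(i, int(j)) for i, j in enumerate(row_nn)]
```

Under a scale gap, the mutual-NN filter discards valid matches by construction, which is the ambiguity the abstract attributes to strict one-to-one assignment; a refinement stage then disambiguates the many-to-one proposals.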